Search Results: "uwe"

31 July 2011

Uwe Hermann: The FONIC Surf-Stick, Huawei E1750 HSPA USB modem, on Debian GNU/Linux via usb_modeswitch and wvdial

FONIC Surf-Stick, Huawei E1750, package
I recently got myself a FONIC account for mobile Internet. This (German) prepaid provider offers a "daily flatrate" for 2.50 € per day. After the 10th day of usage (i.e., 25 €) you don't pay any more. This means that even if you need mobile Internet access 31 days a month, you only pay for 10 days. After 500 MB/day or 5 GB/month you're throttled down to GPRS speed (but you can still connect, and you don't pay more). The FONIC account comes with the "FONIC Surf-Stick", a Huawei E1750 HSPA USB modem (it apparently supports GPRS, EDGE, UMTS, HSDPA up to 7.2 Mbit/s, and HSUPA up to 5.76 Mbit/s), and a SIM card. In order to use the device on Linux you need two packages, usb_modeswitch and wvdial:
  $ apt-get install usb-modeswitch wvdial
Recent versions of usb_modeswitch (and matching udev entries) already support the Huawei E1750 out of the box; a few seconds after attaching the device it's automatically switched into modem mode. After this has happened you should have three new serial devices, usually /dev/ttyUSB0, /dev/ttyUSB1, and /dev/ttyUSB2. You'll need /dev/ttyUSB0 for talking to the device using AT commands. The lsusb output should look like this (see here for the full lsusb -vvv):
  $ lsusb
  Bus 001 Device 045: ID 12d1:1436 Huawei Technologies Co., Ltd. 
(Before usb_modeswitch was run, the USB ID was 12d1:1446.) FONIC Surf-Stick, Huawei E1750, front The required settings for connecting are documented at fonic.de, specifically the APN (pinternet.interkom.de). A username and/or password is not required (the "foo" entries in the config below are just dummy values to keep wvdial happy). You do need to provide your FONIC PIN, though. Dialing is done using the *99# number and the ATDT command. I'm using the following wvdial config file:
  $ cat /etc/wvdial.conf
  [Dialer Defaults]
  Modem = /dev/ttyUSB0
  Baud = 460800
  [Dialer pin]
  Init1 = AT+CPIN=1234
  [Dialer fonic]
  Phone = *99#
  Username = foo
  Password = foo
  Stupid Mode = 1
  Dial Command = ATDT
  Init2 = ATZ
  Init3 = AT+CGDCONT=1,"IP","pinternet.interkom.de"
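Before dialing you can optionally check that the modem actually answers on /dev/ttyUSB0 — a minimal sketch using screen (any serial terminal such as minicom or picocom works just as well); typing AT should be answered with OK, and Ctrl-A k quits screen again:
  $ screen /dev/ttyUSB0 460800
  AT
  OK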
FONIC Surf-Stick, Huawei E1750, back For mobile Internet access you would do the following:
  1. Attach the device via USB, wait a few seconds to let usb_modeswitch do its magic.
  2. Run wvdial pin and wait a few seconds (until the prompt returns):
      $ wvdial pin
      --> WvDial: Internet dialer version 1.61
      --> Initializing modem.
      --> Sending: AT+CPIN=1234
      AT+CPIN=1234
      OK
      --> Modem initialized.
      --> Configuration does not specify a valid phone number.
      --> Configuration does not specify a valid login name.
      --> Configuration does not specify a valid password.
    
  3. Run wvdial fonic and wait until the "CONNECT" message appears and you get DNS addresses:
      $ wvdial fonic
      --> WvDial: Internet dialer version 1.61
      --> Initializing modem.
      --> Sending: ATZ
      ATZ
      OK
      --> Sending: ATZ
      ATZ
      OK
      --> Sending: AT+CGDCONT=1,"IP","pinternet.interkom.de"
      AT+CGDCONT=1,"IP","pinternet.interkom.de"
      OK
      --> Modem initialized.
      --> Sending: ATDT*99#
      --> Waiting for carrier.
      ATDT*99#
      CONNECT
      --> Carrier detected.  Starting PPP immediately.
      --> Starting pppd at Mon Aug  1 xx:xx:xx 2011
      --> Pid of pppd: 18672
      --> Using interface ppp0
      --> local  IP address xxx.xxx.xxx.xxx
      --> remote IP address yyy.yyy.yyy.yyy
      --> primary   DNS address 193.189.244.225
      --> secondary DNS address 193.189.244.206
    
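Once pppd is up, a quick sanity check that the link actually carries traffic (just a sketch; the interface name is the ppp0 reported by wvdial above):
  $ ip addr show ppp0
  $ ping -c 3 debian.org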
If everything worked, you should now be connected. There are other alternatives for achieving the same result, including umtsmon (Qt3 in the last release from 2009, looks a bit unmaintained), kppp, the GNOME NetworkManager, and others, but wvdial worked fine for me. For more details about the Huawei E1750 device (e.g. lsusb -vvv and more photos), see my wiki page at http://randomprojects.org/wiki/FONIC_Surf-Stick_Huawei_E1750 Update 2011-08-03: My measured download speed for a Debian ISO (over HTTP via wget, at night, around 22:00) was 350-470 KB/s, in case anyone is interested. During this download the blue LED on the stick was lit, which denotes a UMTS connection (green == GPRS/EDGE, turquoise == HSDPA).

28 July 2011

Uwe Hermann: Testing stuff with QEMU - Part 4: Debian GNU/Linux on PowerPC

Debian PowerPC in QEMU, screenshot 1
Debian PowerPC in QEMU, screenshot 2
It's been a while since my last blog post, and also quite a while since my last item in the "Testing stuff with QEMU" series, so here goes. I'm using this QEMU image to do compile-tests on the PowerPC architecture for various software projects, especially flashrom (open-source flash ROM programming software). So here's how to install Debian GNU/Linux on PowerPC (in QEMU):
  1. Install QEMU:
    $ apt-get install qemu
  2. Create a (resizable) image which will hold the installed OS. Use the relatively new "qcow2" QEMU image format, which will only take up as much space as is really needed and has some other nice features (compression, encryption).
    $ qemu-img create -f qcow2 debian_powerpc.qcow2 2G
  3. Download a Debian installer ISO for PowerPC:
    $ wget http://cdimage.debian.org/cdimage/archive/5.0.8/powerpc/iso-cd/debian-508-powerpc-netinst.iso
    Note: For some reason, the current Debian stable 6.0.2.1 ISO didn't work for me (red screen with an undefined error during the install; I didn't look into the issue yet). Using an older 5.0.8 image worked fine.
  4. Install Debian in the QEMU image:
    $ qemu-system-ppc -hda debian_powerpc.qcow2 -boot d -cdrom debian-508-powerpc-netinst.iso
    The installation is nothing special; you'll recognize almost everything from the usual x86 installation procedure. Note that you have to use "qemu-system-ppc" (not the usual "qemu"), of course.
  5. After the install has finished, shut down QEMU; from now on you can boot it with:
    $ qemu-system-ppc -hda debian_powerpc.qcow2
See the screenshots for some system info. By default an OpenBIOS firmware and the quik bootloader are used; the emulated "machine" is g3beige (a Heathrow-based PowerMac). You can use QEMU's -M and -cpu options to select different machines or CPUs. Hope this helps.
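For example, to list the machine types your QEMU build knows about and to boot the image with a bit more RAM on the default g3beige machine (a sketch; the available machine and CPU names depend on your QEMU version):
  $ qemu-system-ppc -M ?
  $ qemu-system-ppc -M g3beige -m 512 -hda debian_powerpc.qcow2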

4 April 2011

Dirk Eddelbuettel: RQuantLib 0.3.7

A build-fix release, RQuantLib 0.3.7, is now on CRAN and in Debian. RQuantLib combines (some of) the quantitative analytics of QuantLib with the R statistical computing environment and language. Thanks to the help of Brian Ripley (who compiled QuantLib for 64-bit Windows), Josh Ulrich (who did the same for 32-bit Windows, and arranged the Windows builds) and Uwe Ligges (who runs win-builder for R) we once again have Windows binaries as well as the usual source distribution (and Debian binaries). The only other change was a minor fix to the documentation files. We had found that the pdf reference manual build would break for Uwe and Kurt (using A4 paper settings) but not for myself (using letter). Uwe finally tracked that down: we had some arguments to \url with over seventy characters, and that broke typesetting. I commented those out (as the entries were doxygen-generated QuantLib pages which have volatile names anyway) and fully automated builds now resume as usual. Thanks again to Uwe for that too. No other changes were made. Thanks to CRANberries, there is also a diff to the previous release 0.3.6. Full changelog details, examples and more details about this package are at my RQuantLib page.

1 March 2011

Kai Wasserbäch: Weather Forecasts In A KDE Environment

As Squeeze was just released, some might be on the lookout for a replacement for LiquidWeather++, a SuperKaramba script. Others might be searching for the first time for a way to have a weather forecast displayed on their desktop. With my yaWP maintainer hat on, I'd like to recommend yaWP for the job. Why yaWP? Short answer: because it's the best (SCNR). Longer answer: because yaWP is easy to use, yet highly customizable. yaWP can track the forecasts of several cities for you, and you can put yaWP on your desktop or in your control bar. You can limit the number of days displayed (in the control bar mode). It's localized for many languages (if your language isn't among the existing ones, please consider translating yaWP). yaWP can work with multiple services, is themeable and can display a satellite image for your area. Now, some of you might still (don't ask me why) prefer a different Plasmoid for displaying weather forecasts. But even those can benefit from yaWP and use one of its three data engines (AccuWeather, Google Weather Service and Weather Underground (Wunderground)), thanks to KDE's Weather Ions. OK, and how does it look? Please remember that yaWP can be themed, so the picture below just shows one option of many. A screenshot of yaWP displaying data
Included from screenshots.debian.org (License: GPL2 or later + OpenSSL exception)

3 November 2010

Matt Zimmerman: Weathering the Ubuntu brainstorm

In our first few years, Ubuntu experienced explosive growth, from zero to millions of users. Because Ubuntu is an open project, these people don't just use Ubuntu, but can see what's happening next and influence it through suggestions and contributions. The volume of suggestions quickly became unmanageable through ad hoc discussion, because the feedback overwhelmed the relatively few people who were actively developing Ubuntu.

Ubuntu Brainstorm logo In order to better manage user feedback at this scale, Ubuntu Brainstorm was created in 2008. It's a collaborative filtering engine which allows anyone to contribute an idea and have it voted on by others. Since then, it's been available to Ubuntu developers and leaders as an information source, which has been used in various ways. The top ideas are printed in the Ubuntu Weekly Newsletter each week. We experimented with producing a report each release cycle and sharing it with the developer community. People have been encouraged to take these suggestions to the Ubuntu Developer Summits. We continue to look for new and better ways to process the feedback provided by the user community. Most recently, I asked my colleagues on the Ubuntu Technical Board in a meeting whether we should take responsibility for responding to the feedback available in Ubuntu Brainstorm. They agreed that this was worth exploring, and I put forward a proposal for how it might work. The proposal was unanimously accepted at a later meeting, and I'm working on the first feedback cycle now. In short, the Technical Board will ensure that, every three months, the highest voted topics on Ubuntu Brainstorm receive an official response from the Ubuntu project. The Technical Board won't respond to all of them personally, but will identify subject matter experts within the project, ask them to write a short response, and compile these responses for publication. My hope is that this approach will bring more visibility to common user concerns, help users understand what we're doing with their feedback, and generally improve transparency in Ubuntu. We've already selected the topics for the first iteration based on the most popular items of the past six months, and are organizing responses now. Please visit brainstorm.ubuntu.com and cast your votes for next time!

15 October 2010

Enrico Zini: Award winning code

Award winning code Yuwei and I had a fun day at hhhmcr (#hhhmcr) and even managed to put together a prototype that won the first prize \o/ We played with the gmp24 dataset, kindly extracted from Twitter by Michael Brunton-Spall of the Guardian into a convenient JSON dataset. The idea was to find ways of making it easier to look at the data and make sense of it. This is the story of what we did, including the code we wrote. The original dataset has several JSON files, so the first task was to put them all together:
#!/usr/bin/python
# Merge the JSON data
# (C) 2010 Enrico Zini <enrico@enricozini.org>
# License: WTFPL version 2 (http://sam.zoy.org/wtfpl/)
import simplejson
import os
res = []
for f in os.listdir("."):
    if not f.startswith("gmp24"): continue
    data = open(f).read().strip()
    if data == "[]": continue
    parsed = simplejson.loads(data)
    res.extend(parsed)
print simplejson.dumps(res)
The results, however, were not ordered by date, as GMP had to use several accounts to tweet because Twitter was putting Greater Manchester Police into jail for generating too much traffic. There would be quite a bit to write about that, but let's stick to our work. Here is code to sort the JSON data by time:
#!/usr/bin/python
# Sort the JSON data
# (C) 2010 Enrico Zini <enrico@enricozini.org>
# License: WTFPL version 2 (http://sam.zoy.org/wtfpl/)
import simplejson
import sys
import datetime as dt
all_recs = simplejson.load(sys.stdin)
all_recs.sort(key=lambda x: dt.datetime.strptime(x["created_at"], "%a %b %d %H:%M:%S +0000 %Y"))
simplejson.dump(all_recs, sys.stdout)
I then wanted to play with Tf-idf for extracting the most important words of every tweet:
#!/usr/bin/python
# tfidf - Annotate JSON elements with Tf-idf extracted keywords
#
# Copyright (C) 2010  Enrico Zini <enrico@enricozini.org>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
import sys, math
import simplejson
import re
# Read all the twits
records = simplejson.load(sys.stdin)
# All the twits by ID
byid = dict(((x["id"], x) for x in records))
# Stopwords we ignore
stopwords = set(["by", "it", "and", "of", "in", "a", "to"])
# Tokenising engine
re_num = re.compile(r"^\d+$")
re_word = re.compile(r"(\w+)")
def tokenise(tweet):
    "Extract tokens from a tweet"
    for tok in tweet["text"].split():
        tok = tok.strip().lower()
        if re_num.match(tok): continue
        mo = re_word.match(tok)
        if not mo: continue
        if mo.group(1) in stopwords: continue
        yield mo.group(1)
# Extract tokens from tweets
tokenised = dict(((x["id"], list(tokenise(x))) for x in records))
# Aggregate token counts
aggregated = {}
for d in byid.iterkeys():
    for t in tokenised[d]:
        if t in aggregated:
            aggregated[t] += 1
        else:
            aggregated[t] = 1
def tfidf(doc, tok):
    "Compute TFIDF score of a token in a document"
    return doc.count(tok) * math.log(float(len(byid)) / aggregated[tok])
# Annotate tweets with keywords
res = []
for name, tweet in byid.iteritems():
    doc = tokenised[name]
    keywords = sorted(set(doc), key=lambda tok: tfidf(doc, tok), reverse=True)[:5]
    tweet["keywords"] = keywords
    res.append(tweet)
simplejson.dump(res, sys.stdout)
I thought this was producing a nice summary of every tweet but nobody was particularly interested, so we moved on to adding categories to each tweet. Thanks to Yuwei, who put together some useful keyword sets, we managed to annotate each tweet with a place name (e.g. "Stockport"), a social place name (e.g. "pub", "bank") and a social category (e.g. "man", "woman", "landlord"...). The code is simple; the biggest work in it was the dictionary of keywords:
#!/usr/bin/python
# categorise - Annotate JSON elements with categories
#
# Copyright (C) 2010  Enrico Zini <enrico@enricozini.org>
# Copyright (C) 2010  Yuwei Lin <yuwei@ylin.org>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
import sys, math
import simplejson
import re
# Electoral wards from http://en.wikipedia.org/wiki/List_of_electoral_wards_in_Greater_Manchester
placenames = ["Altrincham", "Sale West",
"Altrincham", "Ashton upon Mersey", "Bowdon", "Broadheath", "Hale Barns", "Hale Central", "St Mary", "Timperley", "Village",
"Ashton-under-Lyne",
"Ashton Hurst", "Ashton St Michael", "Ashton Waterloo", "Droylsden East", "Droylsden West", "Failsworth East", "Failsworth West", "St Peter",
"Blackley", "Broughton",
"Broughton", "Charlestown", "Cheetham", "Crumpsall", "Harpurhey", "Higher Blackley", "Kersal",
"Bolton North East",
"Astley Bridge", "Bradshaw", "Breightmet", "Bromley Cross", "Crompton", "Halliwell", "Tonge with the Haulgh",
"Bolton South East",
"Farnworth", "Great Lever", "Harper Green", "Hulton", "Kearsley", "Little Lever", "Darcy Lever", "Rumworth",
"Bolton West",
"Atherton", "Heaton", "Lostock", "Horwich", "Blackrod", "Horwich North East", "Smithills", "Westhoughton North", "Chew Moor", "Westhoughton South",
"Bury North",
"Church", "East", "Elton", "Moorside", "North Manor", "Ramsbottom", "Redvales", "Tottington",
"Bury South",
"Besses", "Holyrood", "Pilkington Park", "Radcliffe East", "Radcliffe North", "Radcliffe West", "St Mary", "Sedgley", "Unsworth",
"Cheadle",
"Bramhall North", "Bramhall South", "Cheadle", "Gatley", "Cheadle Hulme North", "Cheadle Hulme South", "Heald Green", "Stepping Hill",
"Denton", "Reddish",
"Audenshaw", "Denton North East", "Denton South", "Denton West", "Dukinfield", "Reddish North", "Reddish South",
"Hazel Grove",
"Bredbury", "Woodley", "Bredbury Green", "Romiley", "Hazel Grove", "Marple North", "Marple South", "Offerton",
"Heywood", "Middleton",
"Bamford", "Castleton", "East Middleton", "Hopwood Hall", "Norden", "North Heywood", "North Middleton", "South Middleton", "West Heywood", "West Middleton",
"Leigh",
"Astley Mosley Common", "Atherleigh", "Golborne", "Lowton West", "Leigh East", "Leigh South", "Leigh West", "Lowton East", "Tyldesley",
"Makerfield",
"Abram", "Ashton", "Bryn", "Hindley", "Hindley Green", "Orrell", "Winstanley", "Worsley Mesnes",
"Manchester Central",
"Ancoats", "Clayton", "Ardwick", "Bradford", "City Centre", "Hulme", "Miles Platting", "Newton Heath", "Moss Side", "Moston",
"Manchester", "Gorton",
"Fallowfield", "Gorton North", "Gorton South", "Levenshulme", "Longsight", "Rusholme", "Whalley Range",
"Manchester", "Withington",
"Burnage", "Chorlton", "Chorlton Park", "Didsbury East", "Didsbury West", "Old Moat", "Withington",
"Oldham East", "Saddleworth",
"Alexandra", "Crompton", "Saddleworth North", "Saddleworth South", "Saddleworth West", "Lees", "St James", "St Mary", "Shaw", "Waterhead",
"Oldham West", "Royton",
"Chadderton Central", "Chadderton North", "Chadderton South", "Coldhurst", "Hollinwood", "Medlock Vale", "Royton North", "Royton South", "Werneth",
"Rochdale",
"Balderstone", "Kirkholt", "Central Rochdale", "Healey", "Kingsway", "Littleborough Lakeside", "Milkstone", "Deeplish", "Milnrow", "Newhey", "Smallbridge", "Firgrove", "Spotland", "Falinge", "Wardle", "West Littleborough",
"Salford", "Eccles",
"Claremont", "Eccles", "Irwell Riverside", "Langworthy", "Ordsall", "Pendlebury", "Swinton North", "Swinton South", "Weaste", "Seedley",
"Stalybridge", "Hyde",
"Dukinfield Stalybridge", "Hyde Godley", "Hyde Newton", "Hyde Werneth", "Longdendale", "Mossley", "Stalybridge North", "Stalybridge South",
"Stockport",
"Brinnington", "Central", "Davenport", "Cale Green", "Edgeley", "Cheadle Heath", "Heatons North", "Heatons South", "Manor",
"Stretford", "Urmston",
"Bucklow-St Martins", "Clifford", "Davyhulme East", "Davyhulme West", "Flixton", "Gorse Hill", "Longford", "Stretford", "Urmston",
"Wigan",
"Aspull New Springs Whelley", "Douglas", "Ince", "Pemberton", "Shevington with Lower Ground", "Standish with Langtree", "Wigan Central", "Wigan West",
"Worsley", "Eccles South",
"Barton", "Boothstown", "Ellenbrook", "Cadishead", "Irlam", "Little Hulton", "Walkden North", "Walkden South", "Winton", "Worsley",
"Wythenshawe", "Sale East",
"Baguley", "Brooklands", "Northenden", "Priory", "Sale Moor", "Sharston", "Woodhouse Park"]
# Manual coding from Yuwei
placenames.extend(["City centre", "Tameside", "Oldham", "Bury", "Bolton",
"Trafford", "Pendleton", "New Moston", "Denton", "Eccles", "Leigh", "Benchill",
"Prestwich", "Sale", "Kearsley", ])
placenames.extend(["Trafford", "Bolton", "Stockport", "Levenshulme", "Gorton",
"Tameside", "Blackley", "City centre", "Airport", "South Manchester",
"Rochdale", "Chorlton", "Uppermill", "Castleton", "Stalybridge", "Ashton",
"Chadderton", "Bury", "Ancoats", "Whalley Range", "West Yorkshire",
"Fallowfield", "New Moston", "Denton", "Stretford", "Eccles", "Pendleton",
"Leigh", "Altrincham", "Sale", "Prestwich", "Kearsley", "Hulme", "Withington",
"Moss Side", "Milnrow", "outskirt of Manchester City Centre", "Newton Heath",
"Wythenshawe", "Mancunian Way", "M60", "A6", "Droylesden", "M56", "Timperley",
"Higher Ince", "Clayton", "Higher Blackley", "Lowton", "Droylsden",
"Partington", "Cheetham Hill", "Benchill", "Longsight", "Didsbury",
"Westhoughton"])
# Social categories from Yuwei
soccat = ["man", "woman", "men", "women", "youth", "teenager", "elderly",
"patient", "taxi driver", "neighbour", "male", "tenant", "landlord", "child",
"children", "immigrant", "female", "workmen", "boy", "girl", "foster parents",
"next of kin"]
for i in range(100):
    soccat.append("%d-year-old" % i)
    soccat.append("%d-years-old" % i)
# Types of social locations from Yuwei
socloc = ["car park", "park", "pub", "club", "shop", "premises", "bus stop",
"property", "credit card", "supermarket", "garden", "phone box", "theatre",
"toilet", "building site", "Crown court", "hard shoulder", "telephone kiosk",
"hotel", "restaurant", "cafe", "petrol station", "bank", "school",
"university"]
extras = {"placename": placenames, "soccat": soccat, "socloc": socloc}
# Normalise keyword lists
for k, v in extras.iteritems():
    # Remove duplicates
    v = list(set(v))
    # Sort by length, longest first, so longer keywords match before their substrings
    v.sort(key=lambda x: len(x), reverse=True)
    # Store the cleaned-up list back, otherwise the changes are lost
    extras[k] = v
# Add keywords
def add_categories(tweet):
    text = tweet["text"].lower()
    for field, categories in extras.iteritems():
        for cat in categories:
            if cat.lower() in text:
                tweet[field] = cat
                break
    return tweet
# Read all the twits
records = (add_categories(x) for x in simplejson.load(sys.stdin))
simplejson.dump(list(records), sys.stdout)
All these scripts form a nice processing chain: each script takes a list of JSON records, adds some bits and passes it on. In order to see what we have so far, here is a simple script to convert the JSON twits to CSV so they can be viewed in a spreadsheet:
#!/usr/bin/python
# Convert the JSON twits to CSV
# (C) 2010 Enrico Zini <enrico@enricozini.org>
# License: WTFPL version 2 (http://sam.zoy.org/wtfpl/)
import simplejson
import sys
import csv
rows = ["id", "created_at", "text", "keywords", "placename"]
writer = csv.writer(sys.stdout)
for rec in simplejson.load(sys.stdin):
    rec["keywords"] = " ".join(rec["keywords"])
    rec["placename"] = rec.get("placename", "")
    writer.writerow([rec[row] for row in rows])
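Since every script reads JSON on stdin and writes JSON on stdout (the merge script starts the chain by reading the gmp24* files from the current directory), the whole pipeline can be run in one go. The script file names below are made up for illustration; the post doesn't name them:
$ ./merge.py | ./sort.py | ./tfidf.py | ./categorise.py > annotated.json
$ ./tocsv.py < annotated.json > tweets.csv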
At this point we were coming up with lots of questions: "were there more reports on women or men?", "which place had most incidents?", "what were the incidents involving animals?"... Time to bring Xapian into play. This script reads all the JSON tweets and builds a Xapian index with them:
#!/usr/bin/python
# toxapian - Index JSON tweets in Xapian
#
# Copyright (C) 2010  Enrico Zini <enrico@enricozini.org>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
import simplejson
import sys
import os, os.path
import xapian
DBNAME = sys.argv[1]
db = xapian.WritableDatabase(DBNAME, xapian.DB_CREATE_OR_OPEN)
stemmer = xapian.Stem("english")
indexer = xapian.TermGenerator()
indexer.set_stemmer(stemmer)
indexer.set_database(db)
data = simplejson.load(sys.stdin)
for rec in data:
    doc = xapian.Document()
    doc.set_data(str(rec["id"]))
    indexer.set_document(doc)
    indexer.index_text_without_positions(rec["text"])
    # Index categories as categories
    if "placename" in rec:
        doc.add_boolean_term("XP" + rec["placename"].lower())
    if "soccat" in rec:
        doc.add_boolean_term("XS" + rec["soccat"].lower())
    if "socloc" in rec:
        doc.add_boolean_term("XL" + rec["socloc"].lower())
    db.add_document(doc)
db.flush()
# Also save the whole dataset so we know where to find it later if we want to
# show the details of an entry
simplejson.dump(data, open(os.path.join(DBNAME, "all.json"), "w"))
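Indexing is then just a matter of feeding the annotated JSON to the script, with the (new or existing) database directory as the only argument — file names again made up for illustration:
$ ./toxapian.py tweets.db < annotated.json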
And this is a simple command line tool to query the database:
#!/usr/bin/python
# xgrep - Command line tool to query the GMP24 tweet Xapian database
#
# Copyright (C) 2010  Enrico Zini <enrico@enricozini.org>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
import simplejson
import sys
import os, os.path
import xapian
DBNAME = sys.argv[1]
db = xapian.Database(DBNAME)
stem = xapian.Stem("english")
qp = xapian.QueryParser()
qp.set_default_op(xapian.Query.OP_AND)
qp.set_database(db)
qp.set_stemmer(stem)
qp.set_stemming_strategy(xapian.QueryParser.STEM_SOME)
qp.add_boolean_prefix("place", "XP")
qp.add_boolean_prefix("soc", "XS")
qp.add_boolean_prefix("loc", "XL")
query = qp.parse_query(sys.argv[2],
    xapian.QueryParser.FLAG_BOOLEAN |
    xapian.QueryParser.FLAG_LOVEHATE |
    xapian.QueryParser.FLAG_BOOLEAN_ANY_CASE |
    xapian.QueryParser.FLAG_WILDCARD |
    xapian.QueryParser.FLAG_PURE_NOT |
    xapian.QueryParser.FLAG_SPELLING_CORRECTION |
    xapian.QueryParser.FLAG_AUTO_SYNONYMS)
enquire = xapian.Enquire(db)
enquire.set_query(query)
count = 40
matches = enquire.get_mset(0, count)
estimated = matches.get_matches_estimated()
print "%d/%d results" % (matches.size(), estimated)
data = dict((str(x["id"]), x) for x in simplejson.load(open(os.path.join(DBNAME, "all.json"))))
for m in matches:
    rec = data[m.document.get_data()]
    print rec["text"]
print "%d/%d results" % (matches.size(), matches.get_matches_estimated())
total = db.get_doccount()
estimated = matches.get_matches_estimated()
print "%d results over %d documents, %d%%" % (estimated, total, estimated * 100 / total)
Neat! Now that we have a proper index that supports all sorts of cool things, like stemming, tag clouds, full text search with complex queries, lookup of similar documents, keyword suggestions and so on, it was only fair to put together a web service to share it with other people at the event. It helped that I had already written similar code for apt-xapian-index and dde before. Here is the server, quickly built on bottle. The very last line starts the server, and it is where you can configure the listening interface and port.
#!/usr/bin/python
# xserve - Make the GMP24 tweet Xapian database available on the web
#
# Copyright (C) 2010  Enrico Zini <enrico@enricozini.org>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
import bottle
from bottle import route, post
from cStringIO import StringIO
import cPickle as pickle
import simplejson
import sys
import os, os.path
import xapian
import urllib
import math
bottle.debug(True)
DBNAME = sys.argv[1]
QUERYLOG = os.path.join(DBNAME, "queries.txt")
data = dict((str(x["id"]), x) for x in simplejson.load(open(os.path.join(DBNAME, "all.json"))))
prefixes = {"place": "XP", "soc": "XS", "loc": "XL"}
prefix_desc = {"place": "Place name", "soc": "Social category", "loc": "Social location"}
db = xapian.Database(DBNAME)
stem = xapian.Stem("english")
qp = xapian.QueryParser()
qp.set_default_op(xapian.Query.OP_AND)
qp.set_database(db)
qp.set_stemmer(stem)
qp.set_stemming_strategy(xapian.QueryParser.STEM_SOME)
for k, v in prefixes.iteritems():
    qp.add_boolean_prefix(k, v)
def make_query(qstring):
    return qp.parse_query(qstring,
        xapian.QueryParser.FLAG_BOOLEAN |
        xapian.QueryParser.FLAG_LOVEHATE |
        xapian.QueryParser.FLAG_BOOLEAN_ANY_CASE |
        xapian.QueryParser.FLAG_WILDCARD |
        xapian.QueryParser.FLAG_PURE_NOT |
        xapian.QueryParser.FLAG_SPELLING_CORRECTION |
        xapian.QueryParser.FLAG_AUTO_SYNONYMS)
@route("/")
def index():
    query = urllib.unquote_plus(bottle.request.GET.get("q", ""))
    out = StringIO()
    print >>out, '''
<html>
<head>
<title>Query</title>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
<script type="text/javascript">
$(function() {
    $("#queryfield")[0].focus()
})
</script>
</head>
<body>
<h1>Search</h1>
<form method="POST" action="/query">
Keywords: <input type="text" name="query" value="%s" id="queryfield">
<input type="submit">
<a href="http://xapian.org/docs/queryparser.html">Help</a>
</form>''' % query
    print >>out, '''
<p>Example: "car place:wigan"</p>

<p>Available prefixes:</p>

<ul>
'''
    for pfx in prefixes.keys():
        print >>out, "<li><a href='/catinfo/%s'>%s - %s</a></li>" % (pfx, pfx, prefix_desc[pfx])
    print >>out, '''
</ul>
'''
    oldqueries = []
    if os.path.exists(QUERYLOG):
        total = db.get_doccount()
        fd = open(QUERYLOG, "r")
        while True:
            try:
                q = pickle.load(fd)
            except EOFError:
                break
            oldqueries.append(q)
        fd.close()
        def print_query(q):
            count = q["count"]
            print >>out, "<li><a href='/query?query=%s'>%s (%d/%d %.2f%%)</a></li>" % (urllib.quote_plus(q["q"]), q["q"], count, total, count * 100.0 / total)
        print >>out, "<p>Last 10 queries:</p><ul>"
        for q in oldqueries[:-10:-1]:
            print_query(q)
        print >>out, "</ul>"
        # Remove duplicates
        oldqueries = dict(((x["q"], x) for x in oldqueries)).values()
        print >>out, "<table>"
        print >>out, "<tr><th>10 queries with most results</th><th>10 queries with least results</th></tr>"
        print >>out, "<tr><td>"
        print >>out, "<ul>"
        oldqueries.sort(key=lambda x:x["count"], reverse=True)
        for q in oldqueries[:10]:
            print_query(q)
        print >>out, "</ul>"
        print >>out, "</td><td>"
        print >>out, "<ul>"
        nonempty = [x for x in oldqueries if x["count"] > 0]
        nonempty.sort(key=lambda x:x["count"])
        for q in nonempty[:10]:
            print_query(q)
        print >>out, "</ul>"
        print >>out, "</td></tr>"
        print >>out, "</table>"
    print >>out, '''
</body>
</html>'''
    return out.getvalue()
@route("/query")
@route("/query/")
@post("/query")
@post("/query/")
def query():
    query = bottle.request.POST.get("query", bottle.request.GET.get("query", ""))
    enquire = xapian.Enquire(db)
    enquire.set_query(make_query(query))
    count = 40
    matches = enquire.get_mset(0, count)
    estimated = matches.get_matches_estimated()
    total = db.get_doccount()
    out = StringIO()
    print >>out, '''
<html>
<head><title>Results</title></head>
<body>
<h1>Results for "<b>%s</b>"</h1>
''' % query
    if estimated == 0:
        print >>out, "No results found."
    else:
        # Give as results the first 30 documents; also use them as the key
        # ones to use to compute relevant terms
        rset = xapian.RSet()
        for m in enquire.get_mset(0, 30):
            rset.add_document(m.document.get_docid())
        # Compute the tag cloud
        class NonTagFilter(xapian.ExpandDecider):
            def __call__(self, term):
                return not term[0].isupper() and not term[0].isdigit()
        cloud = []
        maxscore = None
        for res in enquire.get_eset(40, rset, NonTagFilter()):
            # Normalise the score in the interval [0, 1]
            weight = math.log(res.weight)
            if maxscore == None: maxscore = weight
            tag = res.term
            cloud.append([tag, float(weight) / maxscore])
        max_weight = cloud[0][1]
        min_weight = cloud[-1][1]
        cloud.sort(key=lambda x:x[0])
        def mklink(query, term):
            return "/query?query=%s" % urllib.quote_plus(query + " and " + term)
        print >>out, "<h2>Tag cloud</h2>"
        print >>out, "<blockquote>"
        for term, weight in cloud:
            size = 100 + 100.0 * (weight - min_weight) / (max_weight - min_weight)
            print >>out, "<a href='%s' style='font-size:%d%%; color:brown;'>%s</a>" % (mklink(query, term), size, term)
        print >>out, "</blockquote>"
        print >>out, "<h2>Results</h2>"
        print >>out, "<p><a href='/'>Search again</a></p>"
        print >>out, "<p>%d results over %d documents, %.2f%%</p>" % (estimated, total, estimated * 100.0 / total)
        print >>out, "<p>%d/%d results</p>" % (matches.size(), estimated)
        print >>out, "<ul>"
        for m in matches:
            rec = data[m.document.get_data()]
            print >>out, "<li><a href='/item/%s'>%s</a></li>" % (rec["id"], rec["text"])
        print >>out, "</ul>"
        fd = open(QUERYLOG, "a")
        qinfo = dict(q=query, count=estimated)
        pickle.dump(qinfo, fd)
        fd.close()
    print >>out, '''
<a href="/">Search again</a>

</body>
</html>'''
    return out.getvalue()
@route("/item/:id")
@route("/item/:id/")
def show(id):
    rec = data[id]
    out = StringIO()
    print >>out, '''
<html>
<head><title>Result %s</title></head>
<body>
<h1>Raw JSON record for twit %s</h1>
<pre>''' % (rec["id"], rec["id"])
    print >>out, simplejson.dumps(rec, indent=" ")
    print >>out, '''
</pre>
</body>
</html>'''
    return out.getvalue()
@route("/catinfo/:name")
@route("/catinfo/:name/")
def catinfo(name):
    prefix = prefixes[name]
    out = StringIO()
    print >>out, '''
<html>
<head><title>Values for %s</title></head>
<body>
''' % name
    terms = [(x.term[len(prefix):], db.get_termfreq(x.term)) for x in db.allterms(prefix)]
    terms.sort(key=lambda x:x[1], reverse=True)
    # terms is sorted by decreasing frequency, so the first entry holds the maximum
    freq_max = terms[0][1]
    freq_min = terms[-1][1]
    def mklink(name, term):
        return "/query?query=%s" % urllib.quote_plus(name + ":" + term)
    # Build tag cloud
    print >>out, "<h1>Tag cloud</h1>"
    print >>out, "<blockquote>"
    for term, freq in sorted(terms[:20], key=lambda x:x[0]):
        size = 100 + 100.0 * (freq - freq_min) / (freq_max - freq_min)
        print >>out, "<a href='%s' style='font-size:%d%%; color:brown;'>%s</a>" % (mklink(name, term), size, term)
    print >>out, "</blockquote>"
    print >>out, "<h1>All terms</h1>"
    print >>out, "<table>"
    print >>out, "<tr><th>Occurrences</th><th>Name</th></tr>"
    for term, freq in terms:
        print >>out, "<tr><td>%d</td><td><a href='/query?query=%s'>%s</a></td></tr>" % (freq, urllib.quote_plus(name + ":" + term), term)
    print >>out, "</table>"
    print >>out, '''
</body>
</html>'''
    return out.getvalue()
# Change here for bind host and port
bottle.run(host="0.0.0.0", port=8024)
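Starting the server is then simply (again with made-up file names):
$ ./xserve.py tweets.db
and pointing a browser at http://localhost:8024/ gives the search form.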
...and then we presented our work and ended up winning the contest. This was the story of how we wrote this set of award winning code.

26 August 2010

Uwe Hermann: openbiosprog-spi, a DIY Open Hardware and Free Software USB-based SPI BIOS chip flasher using flashrom

openbiosprog-spi device If you're following me on identi.ca you probably already know that I've been designing a small PCB for a USB-based SPI chip programmer named openbiosprog-spi. The main use-case of the device is to help you recover easily from a failed BIOS upgrade (either due to using an incorrect BIOS image, due to power outages during the flashing process, or whatever). The device only supports SPI chips, as used in recent mainboards (in DIP-8 form factor, or via manual wiring possibly also soldered-in SO-8 variants). It can identify, read, erase, or write the chips. Of course the whole "toolchain" of software tools I used for creating the hardware is open-source, and the hardware itself (schematics and PCB layouts) is freely released under a Creative Commons license (i.e., it's an "Open Hardware" device). The user-space source code is part of flashrom (GPL, version 2); the schematics and PCB layouts are licensed under the CC-BY-SA 3.0 license and were created using the open-source Kicad EDA suite (GPL, version 2). openbiosprog-spi schematics
openbiosprog-spi Kicad PCB layout The schematics, PCB layouts, and other material are available from gitorious:
  $ git clone git://gitorious.org/openbiosprog/openbiosprog-spi.git
You can also download the final Gerber files (ZIP) for viewing them, or sending them to a PCB manufacturer. Some more design notes:
  • The device uses the FTDI FT2232H chip as basis for USB as well as for handling the actual SPI protocol in hardware (MPSSE engine of the FT2232H).
  • Attaching the SPI chip:
    • There's a DIP-8 socket on the device so you can easily insert the SPI chip you want to read/erase/program.
    • Optionally, if you don't want a DIP-8 socket, you can solder in a pin-header with 8 pins, which allows you to connect the individual pins to the SPI chip via jumper wires or grippers/probes.
  • The board dimensions are 44 mm x 20 mm, and it's a 2-layer board using mostly 0603 SMD components.
Basic usage example of the device on Linux (or other OSes supported by flashrom):
  $ flashrom -p ft2232_spi:type=2232H,port=A -r backup.bin (reads the current chip contents into a file)
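Probing, writing and erasing work the same way — only a sketch, and you should double-check that the chip was detected correctly before writing:
  $ flashrom -p ft2232_spi:type=2232H,port=A (probes and identifies the chip)
  $ flashrom -p ft2232_spi:type=2232H,port=A -w new.bin (writes new.bin to the chip)
  $ flashrom -p ft2232_spi:type=2232H,port=A -E (erases the chip)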
openbiosprog-spi PCBs
openbiosprog-spi parts list Over at the main project page of openbiosprog-spi at http://randomprojects.org/wiki/Openbiosprog-spi I have put up a lot more photos and information, such as the bill of materials, the Kicad settings I used for creating the PCBs, the Gerber files and the Excellon drill files, and so on. The first few prototype boards I ordered at PCB-POOL.COM (but you can use any other PCB manufacturer, of course); the bill of materials (BOM) lists the Mouser and CSD electronics part numbers and prices, but you can also buy the stuff elsewhere, of course (Digikey, Farnell, whatever). I have already hand-soldered one or two prototypes and tested the device. Both hardware and software basically worked fine; you just need a small one-liner patch to fix an issue in flashrom, but that should be merged upstream soonish. In order to make it easy for interested users to get the PCBs I'll probably make them available in the BatchPCB Market Place soon, so you can easily order them from there (you do still need to solder the components, though). Note: I'm not making any money off of this; it's a pure hobby project. All in all I have to say that this was a really fun little project, and a useful one too. This was my first hardware project using Kicad (I used gEDA/PCB, also an open-source EDA toolsuite, for another small project) and I must say it worked very nicely. I didn't even have to read any manual really; it was all pretty intuitive. Please consider not using Eagle (or other closed-source PCB software) for your next Open Hardware project; there are at least two viable open-source options (Kicad, gEDA/PCB) which both work just fine.

15 July 2010

Uwe Hermann: Using the HP Pavilion dv7-3127eg laptop with Debian GNU/Linux

HP Pavilion dv7-3127eg Yep, so I bought a new laptop recently; my IBM/Lenovo Thinkpad T40p was slowly getting really unbearably sloooow (Celeron 1.5 GHz, 2 GB RAM max). After comparing some models I set out to buy a certain laptop in a local store, which they didn't have in stock, so I spontaneously got another model, the HP Pavilion dv7-3127eg (HP product number VY554EA). Why this one? Well, the killer feature for me was that it has two SATA disks, which allows me to run a RAID-1 in my laptop (see the mdadm sketch at the end of this post). This allows me to sleep better at night, knowing that the next dying disk will not necessarily lead to data loss (yes, I do still perform regular backups, of course). Other pros: Much faster than the old notebook, this one is an AMD Turion II Dual-Core Mobile M520 at 2.3 GHz per core, it has 4 GB RAM (8 GB max), and uses an AMD RS780 / SB700 chipset which is supported by the Free-Software / Open-Source BIOS / firmware project coreboot, so this might make the laptop a good coreboot target in the long run. I'll probably start working on that when I'm willing to open / dissect it or when the warranty expires, whichever happens first. Anyway, I set up a page at randomprojects.org which contains lots more details about using Linux on this laptop:
http://randomprojects.org/wiki/HP_Pavilion_dv7-3127eg
Most of the hardware is supported out of the box, though I haven't yet tested everything. There may be issues with suspend-to-disk / suspend-to-RAM; sometimes it seems to hang (maybe just a simple config change in /etc/hibernate/disk.cfg is needed). Cons: Pretty big and heavy (but that's OK, I use it mostly as a "semi-mobile desktop replacement"), glossy screen, loud fans (probably due to the two disks). For reference, here's an lspci of the box:
  $ lspci -tvnn
  -[0000:00]-+-00.0  Advanced Micro Devices [AMD] RS780 Host Bridge Alternate [1022:9601]
           +-02.0-[01]--+-00.0  ATI Technologies Inc M96 [Mobility Radeon HD 4650] [1002:9480]
                        \-00.1  ATI Technologies Inc RV710/730 [1002:aa38]
           +-04.0-[02-07]--
           +-05.0-[08]----00.0  Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) [168c:002b]
           +-06.0-[09]----00.0  Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168]
           +-0a.0-[0a]--
           +-11.0  ATI Technologies Inc SB700/SB800 SATA Controller [AHCI mode] [1002:4391]
           +-12.0  ATI Technologies Inc SB700/SB800 USB OHCI0 Controller [1002:4397]
           +-12.1  ATI Technologies Inc SB700 USB OHCI1 Controller [1002:4398]
           +-12.2  ATI Technologies Inc SB700/SB800 USB EHCI Controller [1002:4396]
           +-13.0  ATI Technologies Inc SB700/SB800 USB OHCI0 Controller [1002:4397]
           +-13.1  ATI Technologies Inc SB700 USB OHCI1 Controller [1002:4398]
           +-13.2  ATI Technologies Inc SB700/SB800 USB EHCI Controller [1002:4396]
           +-14.0  ATI Technologies Inc SBx00 SMBus Controller [1002:4385]
           +-14.2  ATI Technologies Inc SBx00 Azalia (Intel HDA) [1002:4383]
           +-14.3  ATI Technologies Inc SB700/SB800 LPC host controller [1002:439d]
           +-14.4-[0b]--
           +-18.0  Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration [1022:1200]
           +-18.1  Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map [1022:1201]
           +-18.2  Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller [1022:1202]
           +-18.3  Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control [1022:1203]
           \-18.4  Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control [1022:1204]
Full lspci -vvvxxxxnnn, lsusb -vvv, and a much more detailed list of tested hardware components is available in the wiki.
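Since the two SATA disks were the main reason for buying this laptop, here is a minimal sketch of creating the RAID-1 with mdadm (the partition names are assumptions; dm-crypt/LVM can be layered on top):
  $ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  $ cat /proc/mdstat (shows the array syncing)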

25 June 2010

Luke Faraone: Hello, (Planet Debian readers of the) world!

I'm a Debian Maintainer currently undergoing the New Maintainer process. I'm also an Ubuntu MOTU as of recently. I have several packages in a variety of categories, but I specialize in Python-based software. I'm interested in exploring more ways to improve cross-distribution coordination, specifically as it relates to the Debian Sugar packages. I'm working to get all of the Ubuntu-specific Sugar packages included in Debian, which will probably be a summer-long effort.

24 June 2010

Uwe Hermann: Using the Oasis UMO19 MCU003 400x USB microscope on Linux via luvcview

Oasis UMO19 MCU003 digital USB microscope I've been buying quite a lot of (usually cheapo) gadgets recently, which I'll probably introduce / review in various blog posts sooner or later. Let me start with a fun little gadget, a digital USB-based microscope. I found out about it via this thread over at lostscrews.com. You can get this (or a very similar device) e.g. on eBay for roughly 50 Euros. Mine seems to be from a company called Oasis (though they're probably just the reseller, not sure). The device doesn't seem to have a nice name, but I can see UMO19 MCU003 on the microscope, so I guess that's the name or model number. It can focus at magnifications of 20x or 400x. The image resolution is said to be a maximum of 1600x1200, but in practice most of my images are 640x480; maybe I have to change some settings, and/or the resolution depends on the magnification factor and lighting conditions. The device acts as a simple UVC webcam when attached to USB, so you can view the images easily via any compatible webcam software, e.g. luvcview, and also save screenshots of the magnified areas (see images). UMO19 chip
UMO19 fabric
UMO19 LED First three from left to right: SMD LED (400x), clothes/jacket (400x), random PCB (20x). The other two below: a via on a PCB (400x), and the "pixels" of a TFT screen (400x). It worked out of the box on Linux for me; the uvcvideo kernel driver was loaded automatically.
 $ lsusb
 Bus 001 Device 013: ID 0ac8:3610 Z-Star Microelectronics Corp.
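Viewing and grabbing images then works with any UVC-capable webcam tool, e.g. luvcview — a sketch, the device node and the supported resolutions may differ on your system:
 $ luvcview -d /dev/video0 -s 640x480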
I set up a wiki page for more details (including full lsusb -vvv) and sample images at: http://randomprojects.org/wiki/Oasis_UMO19_MCU003_USB_microscope I will also post some more images there over the next few days. UMO19 TFT
UMO19 via
This is a really fun device for having a look at stuff you'd normally not see (or not see well enough), and it's also useful for e.g. checking PCB solder joints, checking all kinds of electronics for errors or missing/misaligned parts, finding the chip name / model number of very tiny chips, etc. I can also imagine it's quite nice for biological use-cases, e.g. for studying insects, tissue, plants, and so on. Anyway, definitely a nice toy for a relatively low price; I can highly recommend a device like this. Check eBay (search for e.g. "usb mikroskop 400") and various online shops for similar devices; there seem to be a large number of them with different names and from different vendors. Just make sure it has at least 400x magnification; there are also some with only 80x or 200x, which is not as useful as 400x, of course.

8 June 2010

Uwe Hermann: flashrom 0.9.2 released -- Open-Source, cross-platform BIOS / EEPROM / flash chip programmer

The long-pending 0.9.2 version of the open-source, cross-platform, command-line flashrom utility has been released. From the announcement:
New major user-visible features:
* Dozens of newly supported mainboards, chipsets and flash chips.
* Support for Dr. Kaiser PC-Waechter PCI devices (FPGA variant).
* Support for flashing SPI chips with the Bus Pirate.
* Support for the Dediprog SF100 external programmer.
* Selective blockwise erase for all flash chips.
* Automatic chip unlocking.
* Support for each programmer can be selected at compile time.
* Generic detection for unknown flash chips.
* Common mainboard features are now detected automatically.
* Mainboard matching via DMI strings.
* Laptop detection which triggers safety measures.
* Test flags for all parts of flashrom operation.
* Windows support for USB-based and serial-based programmers.
* NetBSD support.
* DOS support.
* Slightly changed command line invocation. Please see the man page for details.
Experimental new features:
* Support for some NVIDIA graphics cards.
* Chip test pattern generation.
* Bit-banging SPI infrastructure.
* Nvidia MCP6*/MCP7* chipset detection.
* Support for Highpoint ATA/RAID controllers.
Infrastructural improvements and fixes:
* Lots of cleanups.
* Various bugfixes and workarounds for broken third-party software.
* Better error messages.
* Reliability fixes.
* Adjustable severity level for messages.
* Programmer-specific chip size limitation warnings.
* Multiple builtin frontends for flashrom are now possible.
* Increased strictness in board matching.
* Extensive selfchecks on startup to protect against miscompilation.
* Better timing precision for touchy flash chips.
* Do not rely on Linux kernel bugs for mapping memory.
* Improved documentation.
* Split frontend and backend functionality.
* Print runtime and build environment information.
The list of supported OSes and architectures is slowly getting longer, e.g. these have been tested: Linux, FreeBSD, NetBSD, DragonFly BSD, Nexenta, Solaris and Mac OS X. There's partial support for DOS (no USB/serial flashers) and Windows (no PCI flashers). Initial (partial) PowerPC and MIPS support has been merged, with ARM support and more upcoming. Also, the list of external (non-mainboard) programmers is growing, e.g. there is support for NICs (3COM, Realtek, SMC, others upcoming), SATA/IDE cards from Silicon Image and Highpoint, some NVIDIA cards, and various USB-, parallel-port- or serial-port-based programmers such as the Bus Pirate, Dediprog SF100, FT2232-based SPI programmers and more. More details at flashrom.org and in the list of supported chips, chipsets, boards, and programmers. I uploaded an svn version slightly more recent than 0.9.2 to Debian unstable, which should reach Debian testing (and Ubuntu, I guess) soonish.
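For the common case of reading or writing the flash chip on the running mainboard, the internal programmer is used; external programmers are selected with -p. Just a sketch — the exact programmer parameters are documented in the man page:
  $ flashrom -p internal -r backup.bin (reads the mainboard flash chip into a file)
  $ flashrom -p buspirate_spi:dev=/dev/ttyUSB0 -w new.bin (writes via a Bus Pirate)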

31 May 2010

Piotr Galiszewski: Hello World

Hello Planet Debian readers!

I never thought that such a thing would ever happen, but I've started blogging ;) So now it is time to introduce myself.

My name is Piotr Galiszewski and I am a second-year student of computer science at AGH - University of Science and Technology in Krakow (Poland). I have been a GNU/Linux user for about 5 years (mostly Debian-based distributions).

Thanks to Debian and Google, this summer I will be working on creating a Qt-based user interface for aptitude as my Google Summer of Code project. I hope that my mentors Sune Vuorela and Daniel Burrows will be patient with me ;) I am sure I will learn a lot from them. Please look at the abstract of my project written by Debian GSoC administrator Obey Arthur Liu:
Qt GUI for aptitude. Currently, KDE users need to use Aptitude via the console interface, or install the newly developed GTK frontend, which does not fit well into the KDE desktop. Making a Qt frontend for Aptitude would solve this problem and bring an advanced and fully Debian-compliant graphical package manager to KDE.
As I wrote in my proposal I will split my work into three main parts:
  1. writing low-level classes which will abstract aptitude's signals and slots (which use sigc++) into Qt signals and slots.
  2. creating and evaluating GUI mockups
  3. implementing the GUI on top of the classes from the first point
Points 1 and 2 will be done simultaneously and will take all of May and half of June. The low-level classes should implement all the functions needed later by the GUI. These classes allow me to avoid direct usage of non-Qt code in the GUI classes and also give me much more time to prepare complete and usable mockups. Every mockup version will be presented and discussed on this blog. The first version should be ready in the next few days, and updates will appear each week (or every two weeks).
After finishing these two steps I will start coding the GUI. With mockups and finished low-level classes this should not be complicated (yeah, I know that this is only my dream).

The full text of my proposal (including a more precise time-line) can be found on the Debian wiki.

Currently, the project is slightly behind schedule. This is caused by changes in my studies plan: the Juwanalia students' festival took place earlier, and it finished yesterday. But my first exam will take place one week later, on 18 May, so I will have more time to catch up with the time-line.

This project is my first direct contribution to Debian, but not my first involvement in the free software movement. I have been a Kadu Instant Messenger developer for more than two years. In the last two years I have been the second most active developer, with more than 700 commits in the master branch. During the GSoC period my Kadu activities will be limited; if time allows, I will still be contributing to Kadu. I can still be found at the Kadu forum or the #kadu channel on irc.freenode.net. I will also continue reviewing patches and fixing less time-consuming bugs.

My plans for the next few days:
  • continue researching aptitude codebase
  • discover all functionalities that should be implemented by Qt frontend
  • review other graphical package managers' GUIs (and probably write about my thoughts on this blog)
  • create first draft of my mockups
If you have any thoughts about this project, please add a comment to this post or contact me directly. I will be glad to read all your opinions.

Cheers

PS. As you can see, English is not my mother tongue, so please forgive my mistakes.

28 April 2010

Wouter Verhelst: It was a dark and stormy night.

Onze-Lieve-Vrouwestraat The moon shone through the clouds. The sun didn't. The lightning did. Or, well, didn't. The beer flew wildly, and changed the food into something of the past. Discussions arose about lasers, ceramic knifes, and stormy nights. Promises were made that were only kept by some, including me, but not including him. Much was said that mattered little. Little was said that should've been forgotten. Yet much was. He would've been proud.

9 April 2010

Uwe Hermann: coreboot / flashrom in GSOC 2010 -- student application deadline today!

GSoC 2010 logo As you may know, there's a Google Summer of Code program again this year. The deadline for student applications is April 9th at 19:00 UTC, so if you're a student and you want to work on a coreboot (open-source BIOS / PC firmware) or flashrom (open-source BIOS chip flasher) project, please apply in time. The following coreboot/flashrom GSoC project ideas have been proposed so far (but you can also suggest your own ideas, of course):
  • Infrastructure for automatic code checking
  • TianoCore on coreboot
  • coreboot port to Marvell ARM SOCs with PCIe
  • coreboot port to AMD 800 series chipsets
  • coreboot mass-porting to AMD 780 series mainboards
  • coreboot panic room
  • coreboot cheap testing rig
  • coreboot GeodeLX port from v3 to v4
  • Drivers for libpayload
  • Board config infrastructure
  • Refactor AMD code
  • Payload infrastructure
  • flashrom: Multiple GUIs for flashrom
  • flashrom: Recovery of dead boards and onboard flash updates
  • flashrom: SPI bitbanging hardware support
  • flashrom: Generic flashrom infrastructure improvements
  • flashrom: Laptop support
See this wiki page for why and how to apply for a coreboot/flashrom project.

6 April 2010

Uwe Hermann: Miro 3.0 released, Debian package available

Miro 3.0 Yep, the new major release, Miro 3.0, of the cross-platform Internet RSS audio/video aggregator and player has been released. Please check the release notes and the feature list for details. Overall, more than 139 issues have been fixed since the last 2.x series release. The most notable changes are probably the dropping of xine support upstream (GStreamer is now used for all video/audio on Linux) and the introduction of subtitle support. I recently uploaded a new Miro 3.0 Debian package to unstable (which was delayed a bit due to Debian server issues); by now it should be available from most mirrors. Let me know if there are any issues...

16 March 2010

Dirk Eddelbuettel: Rcpp 0.7.10

Versions 0.7.7 to 0.7.9 of Rcpp contained a bug: protecting paths with quotes was supposed to help with Windows builds, but did the opposite at least in 'backticks mode' for getting path and/or library information. Using the shQuote() function instead helped. Our thanks to the tireless R-on-Windows maintainer Uwe Ligges for an earlier heads-up about the problem. So another quick bug-fix release 0.7.10 is now in Debian and should be on CRAN some time tomorrow. We also put two small improvements in, see the full NEWS entry for this release:
0.7.10  2010-03-15
    o	new class Rcpp::S4 whose constructor checks if the object is an S4 object
	
    o	maximum number of templated arguments to the pairlist function, 
	the DottedPair constructor, the Language constructor and the 
	Pairlist constructor has been updated to 20 (was 5) and a script has been
	added to the source tree should we want to change it again
    o   use shQuote() to protect Windows path names (which may contain spaces)
As always, even fuller details are on the Rcpp ChangeLog page and the Rcpp page, which also leads to the downloads, the browseable doxygen docs, and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

4 March 2010

Uwe Hermann: libopenstm32 - a Free Software firmware library for STM32 ARM Cortex-M3 microcontrollers

Olimex STM32-H103 eval board I guess it's time to finally announce libopenstm32, a Free Software firmware library for STM32 ARM Cortex-M3 microcontrollers that a few other people and I have been working on in recent weeks. The library is licensed under the GNU GPL, version 3 or later (yes, that's an intentional decision after some discussions we had). The code is available via git:
 $ git clone git://libopenstm32.git.sourceforge.net/gitroot/libopenstm32/libopenstm32
 $ cd libopenstm32
 $ make
Building is done using a standard ARM gcc cross-compiler (arm-elf or arm-none-eabi, for instance); see the summon-arm-toolchain script for the basic idea of how to build one. The current status of the library is listed in the wiki. In short: parts of GPIO, UART, I2C, SPI, RCC, timers and some other basic stuff work and have register definitions (and some convenience functions, but not too many yet). We're working on adding support for more subsystems; any help with this is highly welcome, of course! Luckily, ARM chips (and especially the STM32) have pretty good, freely available datasheets.

We have a few simple example programs, e.g. for the Olimex STM32-H103 eval board (see photo). JTAG flashing can be done using OpenOCD, for example (see the sketch below).

Feel free to join the mailing lists and/or the #libopenstm32 IRC channel on Freenode. The current list of projects where we plan to use this library includes Open-BLDC (an Open Hardware / Free Software brushless motor controller project by Piotr Esden-Tempski), openmulticopter (an Open Hardware / Free Software quadrocopter/UAV project), openbiosprog (an Open Hardware / Free Software BIOS chip flash programmer I'm in the process of designing using gEDA/PCB), and probably a few more. If you plan to work on any new (or existing) hardware or software project involving an STM32 microcontroller, please consider using libopenstm32 (it's the only Free Software library for this microcontroller family that I know of) and help us make it better and more complete. Thanks!
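As a rough sketch (not from the original announcement): a typical OpenOCD flashing run could look something like the following. The interface config file depends on your JTAG adapter, the target script name varies between OpenOCD versions, and example.bin is just a placeholder for your compiled image:
  $ openocd -f interface/YOUR_ADAPTER.cfg -f target/stm32.cfg \
            -c "init" -c "reset halt" \
            -c "flash write_image erase example.bin 0x08000000" \
            -c "reset run" -c "shutdown"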

25 February 2010

Uwe Hermann: How to setup an encrypted USB-disk software-RAID-1 on Debian GNU/Linux using mdadm and cryptsetup

This is what I set up for backups recently, using a cheap USB enclosure which can house two SATA disks and presents them to the system as two USB mass-storage devices (using only one USB cable). Without any further introduction, here's the HOWTO: First, create one big partition of exactly the same size on each of the two disks (/dev/sdc and /dev/sdd in my case). The cfdisk details are omitted here.
  $ cfdisk /dev/sdc
  $ cfdisk /dev/sdd
Then, create a new RAID array using the mdadm utility:
  $ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
The array is named md0, consists of the two devices (--raid-devices=2) /dev/sdc1 and /dev/sdd1, and it's a RAID-1 array, i.e. data is simply mirrored on both disks so if one of them fails you don't lose data (--level=1). After this has been done the array will be synchronized so that both disks contain the same data (this process will take a long time). You can watch the current status via:
  $ cat /proc/mdstat
  Personalities : [raid1]
  md0 : active raid1 sdd1[1] sdc1[0]
        1465135869 blocks super 1.1 [2/2] [UU]
        [>....................]  resync =  0.0% (70016/1465135869) finish=2440.6min speed=10002K/sec
  unused devices: <none>
Some more info is also available from mdadm:
  $ mdadm --detail --scan
  ARRAY /dev/md0 metadata=1.01 name=foobar:0 UUID=1234578:1234578:1234578:1234578
  $ mdadm --detail /dev/md0
  /dev/md0:
          Version : 1.01
    Creation Time : Sat Feb  6 23:58:51 2010
       Raid Level : raid1
       Array Size : 1465135869 (1397.26 GiB 1500.30 GB)
    Used Dev Size : 1465135869 (1397.26 GiB 1500.30 GB)
     Raid Devices : 2
    Total Devices : 2
      Persistence : Superblock is persistent
      Update Time : Sun Feb  7 00:03:21 2010
            State : active, resyncing
   Active Devices : 2
  Working Devices : 2
   Failed Devices : 0
    Spare Devices : 0
   Rebuild Status : 0% complete
             Name : foobar:0  (local to host foobar)
             UUID : 1234578:1234578:1234578:1234578
           Events : 1
      Number   Major   Minor   RaidDevice State
         0       8       33        0      active sync   /dev/sdc1
         1       8       49        1      active sync   /dev/sdd1
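Not a step from the original setup, but if you want the array to be assembled automatically at boot, you can append that scan output to mdadm's config file (on Debian it lives in /etc/mdadm/mdadm.conf):
  $ mdadm --detail --scan >> /etc/mdadm/mdadm.conf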
Next, you'll want to create a big partition on the RAID device (cfdisk details omitted)...
  $ cfdisk /dev/md0
...and then encrypt all the (future) data on the device using dm-crypt+LUKS and cryptsetup:
  $ cryptsetup --verbose --verify-passphrase luksFormat /dev/md0p1
  Enter your desired passphrase here (twice)
  $ cryptsetup luksOpen /dev/md0p1 myraid
After opening the encrypted container with cryptsetup luksOpen you can create a filesystem on it (ext3 in my case):
  $ mkfs.ext3 -j -m 0 /dev/mapper/myraid
That's about it. In the future you can access the RAID data by using the steps below. Starting the RAID and mounting the drive:
  $ mdadm --assemble /dev/md0 /dev/sdc1 /dev/sdd1
  $ cryptsetup luksOpen /dev/md0p1 myraid
  $ mount -t ext3 /dev/mapper/myraid /mnt
Shutting down the RAID:
  $ umount /mnt
  $ cryptsetup luksClose myraid
  $ mdadm --stop /dev/md0
That's all. Performance is shitty due to all the data being shoved out over one USB cable (and USB itself being too slow for these amounts of data), but I don't care too much about that as this setup is meant for backups, not performance-critical stuff.
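One thing the HOWTO doesn't cover: replacing a dead disk. A rough sketch (assuming the failed disk is /dev/sdc and its replacement shows up under the same device name) would be to mark the old partition as failed and remove it from the array, partition the new disk, and add it back, after which mdadm resynchronizes the mirror:
  $ mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
  $ cfdisk /dev/sdc
  $ mdadm /dev/md0 --add /dev/sdc1
  $ cat /proc/mdstat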

9 February 2010

Dirk Eddelbuettel: Rcpp 0.7.5

A new release of our Rcpp R / C++ interface classes is now out; the version number is 0.7.5. It comes on the heels of release 0.7.4 and keeps our semi-frantic schedule of a release every ten or so days going. The package is now on CRAN and in Debian, and mirrors are starting to get the new versions. As before, my local page provides more details and Romain's blog is always worth watching too. The changes are summarised below in the NEWS file snippet; more details are in the ChangeLog as well.
0.7.5	2010-02-08
    o 	wrap has been much improved. wrappable types now are :
    	- primitive types : int, double, Rbyte, Rcomplex, float, bool
    	- std::string
    	- STL containers which have iterators over wrappable types:
    	  (e.g. std::vector<T>, std::deque<T>, std::list<T>, etc ...)
    	- STL maps keyed by std::string, e.g. std::map<std::string, T>
    	- classes that have implicit conversion to SEXP
    	- classes for which the wrap template is fully or partly specialized
    	This allows composition, so for example this class is wrappable:
    	std::vector< std::map<std::string, T> > (if T is wrappable)
    	
    o 	The range based version of wrap is now exposed at the Rcpp::
    	level with the following interface : 
    	Rcpp::wrap( InputIterator first, InputIterator last )
    	This is dispatched internally to the most appropriate implementation
    	using traits
    o	a new namespace Rcpp::traits has been added to host the various
    	type traits used by wrap
    o 	The doxygen documentation now shows the examples
    o 	A new file inst/THANKS acknowledges the kind help we got from others
    o	The RcppSexp has been removed from the library.
    
    o 	The methods RObject::asFoo are deprecated and will be removed
    	in the next version. The alternative is to use as<foo>.
    o	The method RObject::slot can now be used to get or set the 
    	associated slot. This is one more example of the proxy pattern
    	
    o	Rcpp::VectorBase gains a names() method that allows getting/setting
    	the names of a vector. This is yet another example of the 
    	proxy pattern.
    	
    o	Rcpp::DottedPair gains templated operator<< and operator>> that 
    	allow wrap and push_back or wrap and push_front of an object
    	
    o	Rcpp::DottedPair, Rcpp::Language, Rcpp::Pairlist are less
    	dependent on C++0x features. They gain constructors with up
    	to 5 templated arguments. 5 was chosen arbitrarily and might
    	be updated upon request.
    	
    o	Function calls through the Rcpp::Function class are less dependent
    	on C++0x. It is now possible to call a function with up to 
    	5 templated arguments (candidate for implicit wrap)
    	
    o	added support for 64-bit Windows (thanks to Brian Ripley and Uwe Ligges)
As always, even fuller details are in the ChangeLog on the Rcpp page, which also leads to the downloads, the browseable doxygen docs, and zip files of doxygen output for the standard formats. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

6 February 2010

Uwe Hermann: FOSDEM 2010: coreboot and flashrom devroom and talks

coreboot logo Quick public service announcement (which probably comes a bit too late, sorry): There's a coreboot developer room at this year's FOSDEM (Free and Open-Source Software Developer's European Meeting), which starts roughly... um... today. In 20 minutes, actually. Unfortunately I cannot be there, hopefully there will be video archives of the talks. If you're at FOSDEM already, here's the list of talks:
Sat 13:00-14:00 coreboot introduction (Peter Stuge)
Sat 14:00-15:00 coreboot and PC technical details (Peter Stuge)
Sat 15:00-16:00 ACPI and Suspend/Resume under coreboot (Rudolf Marek)
Sat 16:00-17:00 coreboot board porting (Rudolf Marek)
Sat 17:00-18:00 Flashrom, the universal flash tool (Carl-Daniel Hailfinger)
Sat 18:00-19:00 Flash enable BIOS reverse engineering (Luc Verhaegen)
Highly recommended stuff if you're interested in an open-source BIOS and/or open-source, cross-platform flash EEPROM programmer software.
